Abstract:
Conventional deep image inpainting methods are based on an auto-encoder architecture, in which the spatial details of images are lost during down-sampling, degrading the generated results. Moreover, the structure information in the deep layers and the texture information in the shallow layers of the auto-encoder cannot be well integrated. Departing from this conventional architecture, we design a parallel multi-resolution inpainting network with multi-resolution partial convolution, in which low-resolution branches focus on the global structure while high-resolution branches focus on local texture details. All of these high- and low-resolution streams run in parallel and are fused repeatedly with multi-resolution masked representation fusion, so that the reconstructed images are semantically robust and texturally plausible. Experimental results show that our method effectively fuses structure and texture information, producing more realistic results than state-of-the-art methods.
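The renormalization at the heart of partial convolution can be sketched in a few lines. The single-channel NumPy version below is an illustrative simplification of the multi-channel, multi-resolution layers the network actually uses: each output is rescaled by the fraction of valid (unmasked) pixels under the kernel, and the mask is updated so that any position that saw at least one valid pixel becomes valid.

```python
import numpy as np

def partial_conv2d(x, mask, kernel, bias=0.0):
    """Single-channel partial convolution with 'valid' padding (illustrative).

    x, mask : 2-D arrays of equal shape; mask is 1 for known pixels, 0 for holes.
    The response at each position is renormalized by the fraction of valid
    pixels under the kernel, and the mask is updated to 1 wherever any valid
    pixel was seen (the standard partial-convolution mask-update rule).
    """
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    new_mask = np.zeros_like(out)
    window = kh * kw
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            m = mask[i:i + kh, j:j + kw]
            valid = m.sum()
            if valid > 0:
                patch = x[i:i + kh, j:j + kw] * m  # zero out hole pixels
                out[i, j] = (kernel * patch).sum() * (window / valid) + bias
                new_mask[i, j] = 1.0
            else:
                out[i, j] = bias  # window lies entirely inside the hole
    return out, new_mask
```

Note how the renormalization factor `window / valid` compensates for masked-out pixels: on a constant image, the response is the same whether or not a hole touches the window, which is exactly why holes shrink cleanly layer by layer.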
Abstract:
Image composition is an important operation in image processing, but the inconsistency between foreground and background significantly degrades the quality of the composite image. Image harmonization, which aims to make the foreground compatible with the background, is a promising yet challenging task. However, the lack of a high-quality publicly available dataset for image harmonization greatly hinders the development of image harmonization techniques. In this work, we contribute an image harmonization dataset, iHarmony4, by generating synthesized composite images based on the COCO (resp., Adobe5k, Flickr, day2night) dataset, leading to our HCOCO (resp., HAdobe5k, HFlickr, Hday2night) sub-dataset. Moreover, we propose a new deep image harmonization method, DoveNet, using a novel domain verification discriminator, with the insight that the foreground needs to be translated to the same domain as the background. Extensive experiments on our constructed dataset demonstrate the effectiveness of the proposed method. Our dataset and code are available at https://github.com/bcmi/Image_Harmonization_Datasets.
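The domain-verification idea can be illustrated with a toy objective: a discriminator scores how well a foreground domain code matches a background domain code, and real images (whose foreground and background come from the same domain) should score high while composites score low. The dot-product scorer below is a simplifying assumption for illustration, not DoveNet's actual discriminator architecture.

```python
import numpy as np

def domain_verification_loss(fg_code, bg_code, is_real):
    """Toy domain-verification objective (illustrative, not DoveNet's loss).

    fg_code, bg_code : 1-D feature vectors summarizing the foreground and
    background domains. The 'discriminator' here is a sigmoid over their dot
    product (a simplifying assumption): real pairs are pushed toward a match
    score of 1, composite pairs toward 0, via binary cross-entropy.
    """
    score = 1.0 / (1.0 + np.exp(-np.dot(fg_code, bg_code)))
    target = 1.0 if is_real else 0.0
    eps = 1e-9  # numerical guard for log
    return -(target * np.log(score + eps)
             + (1.0 - target) * np.log(1.0 - score + eps))
```

Training the generator against such a discriminator pressures it to pull the foreground's domain code toward the background's, which is the harmonization goal stated in the abstract.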
Abstract:
Recent studies show that differentiable architecture search (DARTS) suffers from a notable instability and collapse issue: skip-connections may gradually dominate the cell, leading to deteriorating architectures. We conjecture that this domination is due to the superiority of skip-connections in gradient compensation. On this foundation, we propose a novel and stable method, called DistillDARTS, which stabilizes DARTS with a knowledge-distillation and self-distillation scheme. Specifically, the distillation serves as a substitute for skip-connections and smooths the back-propagated gradient distributions among the layers of DARTS. By compensating gradients in shallow layers, our method relieves the dependence of the gradient on skip-connections and hence mitigates the collapse issue. Extensive experiments on a range of benchmarks demonstrate that DistillDARTS obtains sturdy architectures with few skip-connections and without additional manual intervention, successfully improving the robustness of DARTS. Thanks to the improved stability, our approach achieves an accuracy of 97.57% on CIFAR-10 and 75.8% on ImageNet.
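As a rough sketch of the kind of auxiliary signal involved, the standard temperature-softened distillation loss is shown below. DistillDARTS applies distillation layer-wise inside the search network to feed gradients to shallow layers, so this is only the generic building block (Hinton-style soft-label KL), not the paper's exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=4.0):
    """Generic soft-label distillation loss (illustrative building block only).

    KL(teacher || student) over temperature-softened distributions, scaled by
    T^2 so gradient magnitudes stay comparable across temperatures.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))
```

Because this term back-propagates a direct gradient into whatever layer it is attached to, it can play the gradient-compensation role the abstract attributes to skip-connections, without biasing the architecture weights toward them.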
Abstract:
Leading indicators have been widely used to predict the direction of the economy and to guide investors' judgements. In line with the development of financial technology, we explore the application of leading indicators from three aspects: optimal transport, uncertainty visualization, and knowledge graph construction. Optimal transport theory automatically calculates the cost of matching two time series, which greatly improves the efficiency and accuracy of finding leading indicators. In addition, we propose a visualization method that illustrates the uncertainty of leading indicators and can extract meaningful information from a wide variety of data and models. Furthermore, we propose to build a network of relationships between industries and indicators using a knowledge graph. Leading and lagging indicators can be effectively discovered through the proposed methods. Experiments verify the feasibility and effectiveness of the proposed optimal transport approach, while the uncertainty visualization model provides reasonable guidance for investors.
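In one dimension, the optimal transport cost between two nonnegative, normalized series has a closed form, namely the L1 distance between their cumulative sums, which makes a lag search for leading indicators cheap to sketch. The `best_lead` helper below is a hypothetical illustration of the idea (shift the candidate indicator forward, find the lag with the lowest matching cost), not the paper's actual algorithm.

```python
import numpy as np

def w1_cost(a, b):
    """1-D Wasserstein-1 distance between two nonnegative series treated as
    histograms over time (closed form: L1 distance between their CDFs)."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a = a / a.sum()
    b = b / b.sum()
    return float(np.abs(np.cumsum(a) - np.cumsum(b)).sum())

def best_lead(indicator, target, max_lag=5):
    """Return (lag, cost): the forward shift at which the candidate indicator
    best matches the target series, i.e. how far it 'leads'. Illustrative only;
    assumes positive-valued series of equal length.
    """
    best = (None, np.inf)
    n = len(target)
    for lag in range(0, max_lag + 1):
        cost = w1_cost(indicator[:n - lag], target[lag:])
        if cost < best[1]:
            best = (lag, cost)
    return best
```

In this framing a leading indicator is simply a series whose minimum-cost lag is positive; a lagging indicator would be found by shifting in the opposite direction.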
Abstract:
Disk and memory faults are the leading causes of server breakdown. A proactive solution is to predict such hardware failures at runtime, then isolate the hardware at risk and back up the data. However, current model-based predictors are incapable of using discrete time-series data, such as the values of device attributes, which convey high-level information about device behavior. In this paper, we propose a novel deep-learning-based scheme for system-level hardware failure prediction. We normalize the distribution of samples' attributes from different vendors to make use of diverse training sets. We propose a temporal Convolutional Neural Network based model that is insensitive to noise in the time dimension. Finally, we design a loss function to train the model effectively with extremely imbalanced samples. Experimental results on an open S.M.A.R.T. dataset and an industrial dataset show the effectiveness of the proposed scheme.
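A standard way to cope with extreme class imbalance (healthy devices vastly outnumber failing ones) is to reweight the loss per class and per example difficulty, as focal loss does. The sketch below illustrates this family of losses only; the paper designs its own loss function, which may differ.

```python
import numpy as np

def focal_loss(p, y, alpha=0.9, gamma=2.0):
    """Binary focal loss for extreme class imbalance (illustrative choice).

    p     : predicted failure probabilities in (0, 1)
    y     : binary labels (1 = failure, the rare class)
    alpha : upweights the rare positive class
    gamma : down-weights easy, confidently classified examples
    """
    p = np.clip(np.asarray(p, float), 1e-7, 1 - 1e-7)
    y = np.asarray(y, float)
    pos = -alpha * (1.0 - p) ** gamma * np.log(p)          # cost when y = 1
    neg = -(1.0 - alpha) * p ** gamma * np.log(1.0 - p)    # cost when y = 0
    return float(np.mean(y * pos + (1.0 - y) * neg))
```

The `(1 - p) ** gamma` factor is what keeps the sea of easy negatives from drowning out the few failure samples: a confidently correct prediction contributes almost nothing, while a missed failure is penalized heavily.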